
    Adding superimposition to a language semantics

    Given the denotational semantics of a programming language, we describe a general method to extend the language in such a way that it supports a form of superimposition, just in the sense of aspect-oriented programming. In the extended language, the programmer can superimpose additional or alternative functionality (aka advice) onto points along the execution of a program. Adding superimposition to a language semantics comes down to three steps: (i) the semantic functions are elaborated to carry advice; (ii) the semantic equations are turned into 'reflective' style so that they can be altered at will; (iii) a construct for binding advice is integrated. We illustrate the approach by representing semantics definitions as interpreters in Haskell.
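
    As a rough illustration of the three steps, here is a minimal Haskell sketch; the toy language, names, and advice representation are assumptions for exposition, not the paper's actual development.

        -- Hypothetical mini-interpreter: advice is threaded through the
        -- semantic functions and every program point is routed through it.
        module Superimpose where

        data Expr = Lit Int | Add Expr Expr deriving Show

        -- Advice transforms the value observed at a program point.
        type Advice = Int -> Int

        -- (i) The semantic functions are elaborated to carry advice.
        type Sem = Advice -> Int

        -- (ii) The semantic equations are written 'reflectively': every
        -- intermediate result passes through the currently bound advice.
        eval :: Expr -> Sem
        eval (Lit n)     adv = adv n
        eval (Add e1 e2) adv = adv (eval e1 adv + eval e2 adv)

        -- (iii) A construct for binding advice onto a program.
        withAdvice :: Advice -> Expr -> Int
        withAdvice = flip eval

        demo :: (Int, Int)
        demo = ( withAdvice id     (Add (Lit 1) (Lit 2))   -- plain meaning: 3
               , withAdvice (* 10) (Add (Lit 1) (Lit 2)) ) -- with advice: 300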

    Semantics-directed implementation of method-call interception

    We describe a form of method-call interception (MCI) that allows the programmer to superimpose extra functionality onto method calls at run-time. We provide a reference semantics and a reference implementation for corresponding language constructs. The setup applies to class-based, statically typed, compiled languages such as Java. The semantics of MCI is used to direct a language implementation with a number of valuable properties: simplicity of the implementational model, run-time adaptation capabilities, static type safety, separate compilation, and reasonable performance. Our implementational development employs source-code instrumentation. We start from a naive implementational model, which is subsequently refined to optimise program execution. The implementation is assessed via benchmarks.
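
    The paper targets Java-like languages via source-code instrumentation; as a language-neutral sketch of the underlying call-wrapping idea, the following Haskell fragment (all names hypothetical) intercepts a call with before/after advice.

        module MCI where

        -- Advice that runs before and after an intercepted call.
        data Advice a b = Advice
          { before :: a -> IO ()
          , after  :: b -> IO ()
          }

        -- Wrap a 'method' so that every call site goes through the advice.
        intercept :: Advice a b -> (a -> IO b) -> (a -> IO b)
        intercept adv method arg = do
          before adv arg
          result <- method arg
          after adv result
          pure result

        -- An ordinary method, unaware of interception.
        greet :: String -> IO String
        greet name = pure ("Hello, " ++ name)

        main :: IO ()
        main = do
          let tracing = Advice (\a -> putStrLn ("call with " ++ show a))
                               (\r -> putStrLn ("returned " ++ show r))
          _ <- intercept tracing greet "world"
          pure ()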

    Parse-tree annotations meet re-engineering concerns

    We characterise a computational model for processing annotated parse trees. The model is basically rewriting-based with specific provisions for dealing with annotations along the ordinary rewrite steps. Most notably, there are progression methods, which define a default for annotating the results of rewriting. There are also access methods, which can be used in the rewrite rules in order to retrieve annotations from the input and to establish annotations in the output. Our approach extends the basic rewriting paradigm with support for the separation of concerns that involve annotations. This is motivated in the context of transformations for software re-engineering where annotations can be used to implement concerns such as layout preservation and reversible preprocessing.
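
    A minimal Haskell sketch of the model follows; the tree representation and method names are illustrative assumptions.

        module AnnRewrite where

        -- Parse trees whose nodes carry an annotation (e.g. layout information).
        data Tree ann = Node ann String [Tree ann] deriving Show

        -- Access methods: retrieve an annotation from the input and
        -- establish an annotation in the output.
        getAnn :: Tree ann -> ann
        getAnn (Node a _ _) = a

        setAnn :: ann -> Tree ann -> Tree ann
        setAnn a (Node _ f ts) = Node a f ts

        -- A rewrite rule; Nothing means the rule does not apply.
        type Rule ann = Tree ann -> Maybe (Tree ann)

        -- Progression method: the default for annotating rewrite results,
        -- here simply carrying over the input node's annotation.
        progress :: Tree ann -> Tree ann -> Tree ann
        progress input = setAnn (getAnn input)

        -- One rewrite step with the progression default applied.
        rewrite :: Rule ann -> Tree ann -> Tree ann
        rewrite rule t = maybe t (progress t) (rule t)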

    Language Support for Megamodel Renarration

    Megamodels may be difficult to understand because they reside at a high level of abstraction and they are graph-like structures that do not immediately provide means of order and decomposition as needed for successive examination and comprehension. To improve megamodel comprehension, we introduce modeling features for the recreation, in fact, renarration of megamodels. Our approach relies on certain operators for extending, instantiating, and otherwise modifying megamodels. We illustrate the approach in the context of megamodeling for Object/XML mapping (also known as XML data binding).
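
    A small Haskell sketch of the idea, with a hypothetical megamodel representation and operators (the paper's actual modeling features differ):

        module Renarrate where

        type Entity = String

        data Rel = ConformsTo Entity Entity
                 | ElementOf  Entity Entity
          deriving Show

        data Megamodel = Megamodel { entities :: [Entity], relations :: [Rel] }
          deriving Show

        emptyM :: Megamodel
        emptyM = Megamodel [] []

        -- Operators for extending a megamodel step by step.
        addEntity :: Entity -> Megamodel -> Megamodel
        addEntity e m = m { entities = e : entities m }

        addRel :: Rel -> Megamodel -> Megamodel
        addRel r m = m { relations = r : relations m }

        -- A renarration is an ordered chain of operator applications, which
        -- imposes the order and decomposition missing from the graph itself.
        renarration :: [Megamodel -> Megamodel]
        renarration =
          [ addEntity "XSD schema"
          , addEntity "XML document"
          , addRel (ConformsTo "XML document" "XSD schema")
          ]

        recreated :: Megamodel
        recreated = foldl (flip ($)) emptyM renarration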

    Deriving tolerant grammars from a base-line grammar

    A grammar-based approach to tool development in re- and reverse engineering promises precise structure awareness, but it is problematic in two respects. Firstly, it is a considerable up-front investment to obtain a grammar for a relevant language or cocktail of languages. Existing work on grammar recovery addresses this concern to some extent. Secondly, it is often not feasible to insist on a precise grammar, e.g., when different dialects need to be covered. This calls for tolerant grammars. In this paper, we provide a well-engineered approach to the derivation of tolerant grammars, which is based on previous work on error recovery, fuzzy parsing, and island grammars. The technology of this paper has been used in a complex Cobol restructuring project on several million lines of code in different Cobol dialects. Our approach is founded on an approximation relation between a tolerant grammar and a base-line grammar, which serves as a point of reference. Thereby, we avoid false positives and false negatives when parsing constructs of interest in a tolerant mode. Our approach accomplishes the effective derivation of a tolerant grammar from the syntactical structure that is relevant for a certain re- or reverse engineering tool. To this end, the productions for the constructs of interest are reused from the base-line grammar together with further productions that are needed for completion.
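
    The following Haskell fragment sketches the derivation idea in miniature (grammar representation and names are assumptions; the real derivation works on the reachable closure of the constructs of interest):

        module Tolerant where

        data Symbol = T String | N String deriving (Eq, Show)

        data Production = Production { lhs :: String, rhs :: [Symbol] }
          deriving Show

        type Grammar = [Production]

        -- Derive a tolerant grammar: reuse the base-line productions for the
        -- constructs of interest and complete the grammar with a permissive
        -- 'water' production in the style of island grammars.
        deriveTolerant :: [String] -> Grammar -> Grammar
        deriveTolerant interest baseline =
             [ p | p <- baseline, lhs p `elem` interest ]
          ++ [ Production "water" [T "<any token>"] ]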

    A Unified Format for Language Documents

    We have analyzed a substantial number of language documentation artifacts, including language standards, language specifications, and language reference manuals, as well as internal documents of standardization bodies. We have reverse-engineered their intended internal structure and compared the results. The Language Document Format (LDF) was developed specifically to support the documentation domain. We have also integrated LDF into an engineering discipline for language documents, including tool support, for example, for rendering language documents, extracting grammars and samples, and migrating existing documents into LDF. The definition of LDF, tool support for LDF, and LDF applications are freely available through SourceForge.

    Strategic polymorphism requires just two combinators!

    In previous work, we introduced the notion of functional strategies: first-class generic functions that can traverse terms of any type while mixing uniform and type-specific behaviour. Functional strategies transpose the notion of term rewriting strategies (with coverage of traversal) to the functional programming paradigm. Meanwhile, a number of Haskell-based models and combinator suites were proposed to support generic programming with functional strategies. In the present paper, we provide a compact and matured reconstruction of functional strategies. We capture strategic polymorphism by just two primitive combinators. This is done without commitment to a specific functional language. We analyse the design space for implementational models of functional strategies. For completeness, we also provide an operational reference model for implementing functional strategies (in Haskell). We demonstrate the generality of our approach by reconstructing representative fragments of the Strafunski library for functional strategies.
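
    The paper's own primitive combinators are not reproduced here; as a flavour of strategic polymorphism in Haskell, the closely related 'Scrap Your Boilerplate' style (Data.Generics from the syb package) mixes type-specific and uniform behaviour in generic traversals:

        {-# LANGUAGE DeriveDataTypeable #-}
        module Strategic where

        import Data.Generics (Data, Typeable, everywhere, everything, mkT, mkQ)

        data Expr = Lit Int | Add Expr Expr | Neg Expr
          deriving (Show, Data, Typeable)

        -- A type-specific rewrite lifted to a generic traversal:
        -- double-negation elimination applied everywhere in a term.
        simplify :: Expr -> Expr
        simplify = everywhere (mkT step)
          where
            step (Neg (Neg e)) = e
            step e             = e

        -- A generic query over terms of any shape: count the literals.
        countLits :: Expr -> Int
        countLits = everything (+) (0 `mkQ` lit)
          where
            lit (Lit _) = 1 :: Int
            lit _       = 0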

    Recovering grammar relationships for the Java language specification

    Grammar convergence is a method that helps in discovering relationships between different grammars of the same language or different language versions. The key element of the method is the operational, transformation-based representation of those relationships. Given input grammars for convergence, they are transformed until they are structurally equal. The transformations are composed from primitive operators; properties of these operators and the composed chains provide quantitative and qualitative insight into the relationships between the grammars at hand. We describe a refined method for grammar convergence, and we use it in a major study, where we recover the relationships between all the grammars that occur in the different versions of the Java Language Specification (JLS). The relationships are represented as grammar transformation chains that capture all accidental or intended differences between the JLS grammars. This method is mechanized and driven by nominal and structural differences between pairs of grammars that are subject to asymmetric, binary convergence steps. We present the underlying operator suite for grammar transformation in detail, and we illustrate the suite with many examples of transformations on the JLS grammars. We also describe the extraction effort, which was needed to make the JLS grammars amenable to automated processing. We include substantial metadata about the convergence process for the JLS so that the effort becomes reproducible and transparent.
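
    A toy Haskell sketch of the convergence idea (the grammar representation and the single operator shown are illustrative assumptions; the paper's operator suite is much richer):

        module Converge where

        import Data.List (sort)

        data Symbol = T String | N String deriving (Eq, Ord, Show)
        data Production = Production String [Symbol] deriving (Eq, Ord, Show)
        type Grammar = [Production]

        -- A primitive transformation operator: rename a nonterminal throughout.
        renameN :: String -> String -> Grammar -> Grammar
        renameN from to = map prod
          where
            prod (Production l rhs) = Production (ren l) (map sym rhs)
            ren n = if n == from then to else n
            sym (N n) = N (ren n)
            sym s     = s

        -- Structural equality modulo the order of productions.
        converged :: Grammar -> Grammar -> Bool
        converged g1 g2 = sort g1 == sort g2

        -- Apply a chain of operators to one grammar and test convergence
        -- against the other.
        convergeWith :: [Grammar -> Grammar] -> Grammar -> Grammar -> Bool
        convergeWith chain g target = converged (foldl (flip ($)) g chain) target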

    Strongly typed heterogeneous collections

    A heterogeneous collection is a datatype that is capable of storing data of different types, while providing operations for look-up, update, iteration, and others. There are various kinds of heterogeneous collections, differing in representation, invariants, and access operations. We describe HList, a Haskell library for strongly typed heterogeneous collections including extensible records. We illustrate HList's benefits in the context of type-safe database access in Haskell. The HList library relies on common extensions of Haskell 98. Our exploration raises interesting issues regarding Haskell's type system, in particular, avoidance of overlapping instances, and reification of type equality and type unification.
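
    For flavour, here is a minimal sketch of a strongly typed heterogeneous list using current GHC extensions; the HList library's actual encoding (built on common extensions of Haskell 98) differs in detail.

        {-# LANGUAGE DataKinds, GADTs, TypeOperators, KindSignatures #-}
        module HListSketch where

        import Data.Kind (Type)

        -- A list whose element types are tracked in a type-level list.
        data HList (ts :: [Type]) where
          HNil  :: HList '[]
          HCons :: t -> HList ts -> HList (t ': ts)

        infixr 5 `HCons`

        -- Type-safe head: only defined for provably non-empty collections.
        hHead :: HList (t ': ts) -> t
        hHead (HCons x _) = x

        -- A record-like row mixing values of different types.
        row :: HList '[Int, String, Bool]
        row = 42 `HCons` "Angus" `HCons` True `HCons` HNil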

    Design patterns and aspects: modular designs with seamless run-time integration

    Some solutions proposed in the original design pattern literature were shaped by deficiencies of the techniques and languages of object-oriented software development. However, new modularity constructs as well as composition and transformation mechanisms offered by aspect-oriented programming address deficiencies of object-oriented modeling. This suggests that classical design pattern solutions should be revisited. In our paper we point out that aspect-oriented programming not only allows for alternative representations of proposed solutions, but also for better solutions in the first place. We advocate a native aspect-oriented approach to design patterns that emphasizes improving design pattern solutions both during development and at run-time. We use a simple yet effective method to analyze and describe different solutions on the basis of variation points, fixed parts, variable parts, and optional glue, employing dynamic run-time weaving.